Results 1 - 6 of 6
1.
J Craniofac Surg ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38666772

ABSTRACT

This retrospective cross-sectional study reviewed adult patients with operated cleft lip and/or palate (CL/P) and noncleft normal controls, and performed comprehensive craniofacial and nasal morphological analyses based on lateral cephalometric radiographs. Pearson or Spearman correlation coefficients were used to assess the correlations. Seven hundred fifty-seven operated patients with CL/P and 165 noncleft normal controls were enrolled. In both the normal and CL/P groups, the S-N-A angle registered positive correlations with nasal base prominence (S-N'-Sn, degrees). Upper facial height (N-ANS, mm) correlated positively with nasal dorsum length (N'-Prn, mm) and nasal bone length (N-Na, mm). In patients with bilateral cleft lip and palate, however, there was a moderate negative correlation (r = -0.541, P < 0.05) between the soft tissue facial profile angle (FH-N'Pog', degrees) and the nasolabial angle (Cm-Sn-ULA, degrees). Correlations exist between the morphology of the jaw bones and the external nose among patients with CL/P. Maxillary sagittal insufficiency is associated with a concave nasal profile, and maxillary height is associated with nasal length.
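The kind of correlation analysis described above can be sketched in a few lines with scipy; a common convention (an assumption here, not stated in the abstract) is to use Pearson when both variables pass a normality check and Spearman otherwise. The measurements below are synthetic placeholders, not data from the study:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, shapiro

# Synthetic placeholder measurements (NOT study data): e.g. S-N-A angle
# and nasal base prominence (S-N'-Sn) for a set of subjects.
rng = np.random.default_rng(0)
sna = rng.normal(80, 3, 50)
s_n_sn = 0.6 * sna + rng.normal(0, 2, 50)

# Choose Pearson when both samples look normal, Spearman otherwise.
if shapiro(sna).pvalue > 0.05 and shapiro(s_n_sn).pvalue > 0.05:
    r, p = pearsonr(sna, s_n_sn)
else:
    r, p = spearmanr(sna, s_n_sn)
print(f"r = {r:.3f}, p = {p:.3g}")
```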

2.
J Craniomaxillofac Surg ; 51(11): 702-707, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37741800

ABSTRACT

This retrospective cross-sectional study reviewed adult patients with unrepaired SMCP, unrepaired OCP, and noncleft normal controls, and performed comprehensive skeletal and soft tissue morphological analyses based on lateral cephalometric radiographs. One-way ANOVA and rank-sum tests were used to detect potential intergroup differences. Thirty-two subjects with unrepaired SMCP, 42 with unrepaired OCP, and 28 noncleft normal controls were enrolled. Both the SMCP and OCP groups differed significantly from the normal controls in sagittal maxillary length, jaw relationship, facial profile angle, nasal base and nasal tip prominence, upper lip position, and lower lip protrusion. The S-N-A angle in the control group (82.25 ± 2.74°) was significantly greater than in the SMCP (77.96 ± 4.05°, p < 0.001) and OCP (78.55 ± 2.93°, p < 0.001) groups. The nasolabial angle in the control group (99.18 ± 8.76°) was significantly greater than in the SMCP (91.75 ± 8.93°, p = 0.002) and OCP (93.69 ± 7.24°, p = 0.020) groups. No significant difference was detected between the SMCP and OCP groups in the other measurements except upper facial height. Within the limitations of the study, it appears that craniofacial growth is impaired in patients with submucous cleft palate to the same extent as in patients with a conventional cleft palate.
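The three-group comparison above (one-way ANOVA plus a rank-sum analogue) can be sketched with scipy. The group values below are synthetic draws that only loosely mimic the abstract's reported S-N-A means and standard deviations, not the actual study data:

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

# Synthetic placeholder S-N-A angles (degrees), NOT study data; the
# means/SDs and group sizes loosely follow the abstract's figures.
rng = np.random.default_rng(1)
control = rng.normal(82.25, 2.74, 28)
smcp = rng.normal(77.96, 4.05, 32)
ocp = rng.normal(78.55, 2.93, 42)

# One-way ANOVA for the parametric comparison...
F, p_anova = f_oneway(control, smcp, ocp)
# ...and a Kruskal-Wallis rank-sum test as the non-parametric analogue.
H, p_rank = kruskal(control, smcp, ocp)
print(f"ANOVA p = {p_anova:.4g}, rank-sum p = {p_rank:.4g}")
```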


Subjects
Cleft Lip; Cleft Palate; Humans; Adult; Cleft Palate/diagnostic imaging; Cleft Palate/surgery; Retrospective Studies; Cross-Sectional Studies; Cephalometry; Cleft Lip/surgery
3.
Front Pediatr ; 11: 1187224, 2023.
Article in English | MEDLINE | ID: mdl-37609363

ABSTRACT

Marginal velopharyngeal inadequacy (MVPI) is a particular status of velopharyngeal closure after cleft palate repair. The physiological and phonological characteristics of patients with MVPI differ significantly from those with typical velopharyngeal insufficiency. The pathological mechanisms and diagnostic criteria of MVPI remain controversial; there is limited evidence to guide the selection of surgical and non-surgical management options, and there are no recognized standards for treatment protocols. Based on a systematic study of the relevant literature, this review identifies specific problems that are currently under-recognized in the diagnosis and treatment of MVPI and provides guidelines for further exploration of standardized and reasonable intervention protocols for MVPI.

4.
Article in English | MEDLINE | ID: mdl-32396088

ABSTRACT

Facial expression recognition, face synthesis, and face alignment are three coherently related tasks that can be solved in a joint framework. To achieve this goal, in this paper we propose a novel end-to-end deep learning model that exploits the expression code, geometry code, and generated data jointly for simultaneous pose-invariant facial expression recognition, face image synthesis, and face alignment. The proposed deep model enjoys several merits. First, to the best of our knowledge, this is the first work to address these three tasks jointly in a unified deep model so that they complement and enhance each other. Second, the proposed model can effectively disentangle the global and local identity representation from different expression and geometry codes; as a result, it can automatically generate facial images with different expressions under arbitrary geometry codes. Third, the three tasks can further boost one another's performance via our model. Extensive experimental results on three standard benchmarks demonstrate that the proposed deep model performs favorably against state-of-the-art methods on all three tasks.

5.
Neural Netw ; 125: 104-120, 2020 May.
Article in English | MEDLINE | ID: mdl-32087390

ABSTRACT

Collaborative representation-based classification (CRC) is a well-known representation-based classification method in pattern recognition. Recently, many variants of CRC have been designed for classification tasks with good classification performance. However, most of them ignore the inter-class pattern discrimination among the class-specific representations, which is critical for strengthening the pattern discrimination of collaborative representation (CR). In this article, we propose a novel CR approach for image classification, called weighted discriminative collaborative competitive representation (WDCCR). WDCCR designs a discriminative and competitive collaborative representation among all the classes by fully exploiting the class information. On the one hand, we incorporate two discriminative constraints into the unified WDCCR model: the competitive class-specific representation residuals and the pairs of class-specific representations for each query sample. On the other hand, a constraint on the weighted categorical representation coefficients is introduced into the proposed model to further enhance the power of discriminative and competitive representation. In the weighted constraint, we assume that classes other than that of the query sample should contribute less to the representation, i.e., have small representation coefficients, and two types of weight factors are designed to constrain the coefficients accordingly. Furthermore, a robust WDCCR (R-WDCCR) is proposed with l1-norm representation fidelity for recognizing noisy images. Extensive experiments on six image data sets demonstrate the effectiveness and robustness of the proposed WDCCR and R-WDCCR compared with related state-of-the-art representation-based classification methods.


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pattern Recognition, Automated/methods; Classification/methods; Image Processing, Computer-Assisted/standards; Pattern Recognition, Automated/standards
6.
Article in English | MEDLINE | ID: mdl-32070956

ABSTRACT

Driven by recent advances in human-centered computing, facial expression recognition (FER) has attracted significant attention in many applications. However, most conventional approaches either perform face frontalization on a non-frontal facial image or learn a separate classifier for each pose. Different from existing methods, this paper proposes an end-to-end deep learning model that allows simultaneous facial image synthesis and pose-invariant facial expression recognition by exploiting the shape geometry of the face image. The proposed model is based on a generative adversarial network (GAN) and enjoys several merits. First, given an input face and a target pose and expression designated by a set of facial landmarks, an identity-preserving face can be generated under the guidance of the target pose and expression. Second, the identity representation is explicitly disentangled from both expression and pose variations through the shape geometry delivered by the facial landmarks. Third, our model can automatically generate face images with different expressions and poses in a continuous way to enlarge and enrich the training set for the FER task. Our approach is demonstrated to perform well compared with state-of-the-art algorithms on both controlled and in-the-wild benchmark datasets, including Multi-PIE, BU-3DFE, and SFEW.
